In this post, Dr Cameron Edmond, Lecturer in Game Development (School of Computing, Faculty of Science & Engineering) shares what happened when Generative AI was set as the topic for an ethics essay in his unit.
While my teaching area is game development, I spent my PhD and postdoctoral life studying the use of algorithms and AI to create literary works. So, when this weird little GPT thing got an update and a fresh lick of paint with ChatGPT, I didn’t share the enthusiasm of my most ardent colleagues, nor the scepticism of the most negative.
All I knew was that – as a technologist and an educator – I would be responsible for teaching students how to navigate the AI landscape around them.
Uncharted Waters
In our second-year game development unit, students write an essay on an ethical issue impacting the games industry. Generative AI is a huge topic: Copilot lets programmers turn their natural language into code, Ubisoft’s Ghost Writer takes care of basic dialogue writing, and image generators are sparking plenty of conversations around copyright and artists’ rights.
So, it made sense to set Generative AI as a topic for the ethics essay this year. However, it would be foolish to think this is where AI’s place in the unit ended. My colleagues in creative writing have mentioned their students – you know, people who have chosen to study writing – are using ChatGPT to complete their assignments. If that’s the case, why wouldn’t my computing students, some of whom hate writing, follow suit?
Moreover, the use of AI isn’t always easy to detect. There are a few tell-tale signs, sure, but most of these traits (circular logic and points left hanging) are things I’ve seen from undergraduates before, and I’d rather not accuse a student trying their best of using AI to “cheat”.
A colleague over in MaPS (Mathematical & Physical Sciences), Associate Professor Petra Graham, had mentioned the use of an “acknowledgement form”, where students could indicate they had used Grammarly or citation generators in their work, and she’d wondered at its applicability to ChatGPT. I loved this idea and decided to trial it as a way of allowing students to self-report the use of AI in their essays. (The acknowledgement form used in the unit COMP2160 is on the right.)
When writing the assignment specification, I stressed to students that we were in uncharted waters, and that this exercise was about conversations and finding our way together.
Here are five lessons I learnt in Semester 2, 2023.
Lesson 1: Scaffolding goes a long way
I was excited for this experiment, but also anxious students might see the acknowledgement form as a “get out of gaol free card”, and simply dump their essay question into ChatGPT then call it a day.
During a live lecture, we fed ChatGPT the assignment spec and asked it to write its own ethics essay. The results were bad. Producing around 800 of the required 2000 words, ChatGPT managed to talk in circles, occasionally dropping a few ethical mantras and surface-level recommendations. It didn’t do anything worthy of a pass. Students were quick to critique it, pointing out how it never really made any points. In a move that has earnt meme-status among students, it also fabricated citations, like an article from “John A. Doe and Jane B. Smith”, hosted at “academicjournal.com”, and one from “Sarah GameDev”.
Showing students this AI-generated essay allowed us to have an open conversation about what AI could be useful for. We discussed “rubber ducking” – chatting to the AI to get your ideas flowing before writing – and using it to touch up grammar and spelling (something I advised against, though I did mention it was the kind of thing “allowed” in our framework).
Opening up this conversation and positioning AI as a tool via a cautionary tale meant we weren’t just sending students off into the algorithmic wilderness, but instead giving them a foundation to think critically and be mindful about how they conduct their research and writing. Isn’t that what universities are all about?
Lesson 2: Students already have views on AI, and they’re not what you think
All told, 13 of our 80 students reported using AI in their work. Others included the acknowledgement form but declared their work to be 100% human, almost as a badge of honour. This was especially common in essays that had generative AI as the topic, as students chose to “walk the walk” while being critical of AI in game development.
Conversations in class revealed more insights. Some students said it wasn’t the right tool for the job, but could have other uses. One student asked why you would bother with AI when you could just write it yourself in the first place.
It’s easy to get hung up on the idea that all students will be using AI, and that the days of written assessments are over. Students want to learn, and people want to make stuff. Sharing ideas and challenging yourself is part of the human condition. AI isn’t taking that drive away any time soon.
Lesson 3: Students who are using AI aren’t always using it for writing
I was stunned when I opened one submission and found the word count was 9000 words, well over the 2000-2500 we’d specified. But that was because the student had asked ChatGPT to proofread their work, and submitted the original draft, ChatGPT’s rework, and their final version. The student had also asked for any adjustments to be in bold. They were using ChatGPT as a learning tool, getting changes they could then analyse and incorporate as necessary, rather than just asking the tool to “make it better.”
Others used ChatGPT and ToolBaz to generate examples to explore, then Googled them and found most didn’t exist. But as they tried to get ideas, they honed their written expression. With each prompt, they got better at articulating themselves and at asking the right questions.
Some students used AI in ways that revealed gaps in their knowledge. Scribbr was used to generate references from sources, showing that the students didn’t know how to find a source’s citation info. Not a great result, but now we know it’s a problem, and we can intervene.
While some students used AI-driven translation tools, others turned to ChatGPT for the same purpose. I won’t pretend to know the inner workings of ChatGPT, but at the end of the day it isn’t built for translation.
In all of these cases, the other markers and I now had an opportunity to give students feedback on their use of AI. Rather than them hiding it and losing marks without really knowing why, we could instead point them to other strategies, acknowledge where AI use was effective, and more.
Lesson 4: Give a little, get a little
I’m not naïve. I’m sure there were students who didn’t report their use of AI. But I’m not worried. There will always be students who get a friend to write something for them, sneak past Turnitin by copying from a Reddit post, and so on. Handling dishonesty is what the Academic Integrity team is for. I was focused on creating a space where students could be open about their curiosity towards emerging technology, and I simply asked that they respect our assessment processes in return.
Maybe I’m being optimistic, but the results showed me students were willing to meet us half-way.
Lesson 5: Freedom breeds experimentation
I don’t know how many students would have experimented with AI if I hadn’t given them permission. But I do know that some students used AI in ways I hadn’t expected, and learnt what works for them in the process. I always tell my students not to be afraid of experimenting and trying out new ideas, especially when working in a creative industry like game development.
There are very real conversations around the ethics of AI that we must have. This article isn’t the place for me to get on my soapbox (if you spot me on campus, I’m more than happy to do so, but you might regret asking). I remain neither a lover nor a hater of these new AI tools, but I’d much rather my students made up their own minds through experimentation, play and reflection.
Acknowledgements
I didn’t go on this journey alone. My co-lecturer, Dr Malcolm Ryan, also presented during the ethics essay lecture, and had the idea of poking ChatGPT for sources. Kayson Whitehouse and Sandra Trinh assisted with marking the assignment and provided me with their own feedback on what they found.
Want to find out how other MQ educators are using AI in teaching? See these other posts on TECHE:
- Artificial Intelligence, real results. Generative AI’s S2 report card
- How FMHHS are guiding students on appropriate use of AI tools
- What to do with quizzes now that ChatGPT quickly aces them
- 11 ways ChatGPT can save you time
- Tuning up an assessment task for the AI world (a case study)
- Assessments and AI… a 3 stage approach
- Safeguarding academic integrity, connecting law students with markers: Assessment via viva voce
- See all articles about generative artificial intelligence on Teche
Banner image: Photo by Owlie Productions on Shutterstock
AI Generated image: Created using DALL-E by Olga Kozar
COMP2160 Acknowledgement form provided by Cameron Edmond
Person at computer: Image by Kelly Sikkema on Unsplash